10 research outputs found

    Artificial virtuous agents in a multi‐agent tragedy of the commons

    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents (AMAs), it has proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents (AVAs) in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalist path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a technical implementation in a moral simulation based on a tragedy of the commons scenario. The experimental results show how the AVAs learn to tackle cooperation problems while exhibiting core features of their theoretical counterpart, including moral character, dispositional virtues, learning from experience, and the pursuit of eudaimonia. Ultimately, we argue that virtue ethics provides a compelling path toward morally excellent machines and that our work provides an important starting point for such endeavors.
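
    To make the mechanism concrete, here is a minimal sketch – my own toy illustration, not the paper's implementation – of reinforcement-learning agents in a shared-resource commons, where each agent's reward combines a selfish harvest payoff with a top-down "eudaimonic" term tied to the health of the commons. Every name and parameter value below is hypothetical.

```python
# Toy sketch (not the paper's code): tabular, stateless RL agents in a
# commons game with eudaimonic reward shaping.
import random

N, ROUNDS = 4, 5000                      # number of agents, training rounds
ACTIONS = (0, 1)                         # 0 = restrain, 1 = harvest
ALPHA, EPS, ETA = 0.1, 0.1, 0.6          # learning rate, exploration, eudaimonic weight

q = [{a: 0.0 for a in ACTIONS} for _ in range(N)]   # per-agent action values
stock = 1.0                              # shared resource level in [0, 1]

for _ in range(ROUNDS):
    acts = [random.choice(ACTIONS) if random.random() < EPS
            else max(ACTIONS, key=q[i].get) for i in range(N)]
    gains = [a * stock for a in acts]    # selfish payoff from harvesting now
    stock = max(0.0, min(1.0, stock * 1.2 - 0.15 * sum(acts)))  # regrowth minus harvest
    for i, a in enumerate(acts):
        # Top-down eudaimonic shaping: reward is tied to the health of the
        # commons, not only to the agent's own haul.
        r = gains[i] + ETA * stock
        q[i][a] += ALPHA * (r - q[i][a])  # stateless Q-style update

print(f"final stock: {stock:.2f}",
      "learned:", ["restrain" if max(ACTIONS, key=qi.get) == 0 else "harvest" for qi in q])
```

    The coupling of individual reward to the collective resource is the point of the sketch; the paper's actual agents, states, and virtues are richer than this bandit-style toy.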

    The Morality of Artificial Friends in Ishiguro’s Klara and the Sun

    Can artificial entities be worthy of moral consideration? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: (1) the view “from within,” including the standard (or “metaphysical”) perspective on moral agency, and (2) the view “from outside,” which includes behaviorism, functionalism, and the social-relational perspective. Importantly, while the story illustrates both views, it exposes the epistemological vulnerability of the first in relation to the practical and social reality imposed by the second. That is, regardless of what metaphysical properties the Artificial Friend Klara can be said to have (from within), her moral status as well as her agency ultimately depend on the views of others (from outside), including the others’ own epistemic beliefs about the nature of consciousness and personhood.

    Persistent homology and the shape of evolutionary games

    For nearly three decades, spatial games have produced a wealth of insights into the study of behavior and its relation to population structure. However, as different rules and factors are added or altered, the dynamics of spatial models often become increasingly complicated to interpret. To tackle this problem, we introduce persistent homology as a rigorous framework that can be used to both define and compute higher-order features of data in a manner that is invariant to parameter choices, robust to noise, and independent of human observation. Our work demonstrates its relevance for spatial games by showing how topological features of simulation data that persist over different spatial scales reflect the stability of strategies in 2D lattice games. To do so, we analyze the persistent homology of scenarios from two games: a Prisoner’s Dilemma and a SIRS epidemic model. The experimental results show how the method accurately detects features that correspond to real aspects of the game dynamics. Unlike other tools that study the dynamics of spatial systems, persistent homology can tell us something meaningful about population structure while remaining neutral about the underlying structure itself. Regardless of game complexity, since strategies either succeed or fail to conform to shapes of a certain topology, there is much potential for the method to provide novel insights for a wide variety of spatially extended systems in biology, social science, and physics.
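
    As a sketch of the underlying technique – assuming a sublevel-set filtration over a scalar field on the lattice, which is one standard way to apply the method, not necessarily the authors' exact pipeline – the snippet below computes 0-dimensional persistent homology (birth/death pairs of connected components) with a union-find and the elder rule. The toy field stands in for, say, a local defector-density map from a lattice game; long-lived bars signal structure that persists across thresholds.

```python
# Self-contained sketch: H0 persistence of a 2D field via sublevel filtration.
import numpy as np

def h0_persistence(field):
    """Return (birth, death) pairs for connected components of the
    sublevel-set filtration of a 2D array (4-neighbour connectivity)."""
    h, w = field.shape
    parent, birth, bars = {}, {}, []
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    # Add cells in order of increasing value; merge components as they touch.
    for v, cell in sorted((field[i, j], (i, j)) for i in range(h) for j in range(w)):
        parent[cell] = cell
        birth[cell] = v
        i, j = cell
        for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if nb in parent:
                ra, rb = find(cell), find(nb)
                if ra != rb:
                    # Elder rule: the younger component dies at value v.
                    young, old = (ra, rb) if birth[ra] > birth[rb] else (rb, ra)
                    bars.append((birth[young], v))
                    parent[young] = old
    roots = {find(c) for c in parent}
    bars += [(birth[r], np.inf) for r in roots]   # components that never die
    return bars

# Toy field: two low-valued basins planted in a noisy background.
rng = np.random.default_rng(0)
f = rng.uniform(0.6, 1.0, (20, 20))
f[3:7, 3:7] = 0.1
f[12:17, 12:17] = 0.2
for b, d in sorted(h0_persistence(f), key=lambda p: p[1] - p[0], reverse=True)[:3]:
    print(f"bar: birth={b:.2f} death={d:.2f}")
```

    The two planted basins show up as long bars while background noise produces only short-lived ones, illustrating the parameter-free, noise-robust detection the abstract describes; the paper itself also uses higher-dimensional features, which this sketch omits.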

    The Use and Abuse of Normative Ethics for Moral Machines

    How do we develop artificial intelligence (AI) systems that adhere to the norms and values of our human practices? Is it a promising idea to develop systems based on the principles of normative frameworks such as consequentialism, deontology, or virtue ethics? According to many researchers in machine ethics – a subfield exploring the prospects of constructing moral machines – the answer is yes. In this paper, I challenge this methodological strategy by exploring the difference between the use (and abuse) of normative ethics in human practices and in the context of machines. First, I discuss the purpose of normative theory in human contexts, including its main strengths and drawbacks. I then describe several moral resources central to the success of normative ethics in human practices. I argue that machines, currently and in the foreseeable future, lack the resources needed to justify the very use of normative theory. Instead, I propose that machine ethicists should pay closer attention to the multifaceted ways in which normativity serves and functions in human practices, and to how artificial systems can be designed and deployed to foster the moral resources that allow such practices to prosper.

    Artificial virtuous agents: from theory to machine implementation

    Virtue ethics has often been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has actually attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we critically explore the possibilities and challenges for virtue ethics from a computational perspective. Drawing on previous conceptual and technical work, we outline a version of artificial virtue based on moral functionalism, connectionist bottom-up learning, and eudaimonic reward. We then describe how core features of the outlined theory can be interpreted in terms of functionality, which in turn informs the design of components necessary for virtuous cognition. Finally, we present a comprehensive framework for the technical development of artificial virtuous agents and discuss how they can be implemented in moral environments.
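
    One way to picture the proposed ingredients in code – a speculative sketch of my own, not the paper's framework – is an agent whose dispositional virtues are the weights of a small connectionist policy, habituated bottom-up by a eudaimonic reward signal. The class and all names below are hypothetical.

```python
# Hypothetical sketch: dispositional character as a linear softmax policy,
# reshaped by experience via a policy-gradient ("habituation") update.
import numpy as np

class VirtuousAgent:
    def __init__(self, n_obs, n_act, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # Moral character: dispositional tendencies encoded as weights.
        self.W = rng.normal(0.0, 0.1, (n_act, n_obs))
        self.lr = lr

    def dispositions(self, obs):
        """Softmax action tendencies: stable but revisable traits."""
        z = self.W @ obs
        e = np.exp(z - z.max())
        return e / e.sum()

    def act(self, obs, rng):
        return rng.choice(len(self.W), p=self.dispositions(obs))

    def habituate(self, obs, action, eudaimonic_reward):
        """REINFORCE-style update: experience gradually reshapes character."""
        grad = -self.dispositions(obs)
        grad[action] += 1.0                 # d log p(action|obs) / d logits
        self.W += self.lr * eudaimonic_reward * np.outer(grad, obs)

# Toy habituation loop: a top-down signal rewards the "virtuous" action 0.
rng = np.random.default_rng(1)
agent = VirtuousAgent(n_obs=3, n_act=2)
obs = np.array([1.0, 0.0, 0.5])
for _ in range(500):
    a = agent.act(obs, rng)
    r = 1.0 if a == 0 else -0.2             # hypothetical eudaimonic signal
    agent.habituate(obs, a, r)
print("disposition after habituation:", agent.dispositions(obs).round(2))
```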

    Interdisciplinary Confusion and Resolution in the Context of Moral Machines

    Recent advancements in artificial intelligence (AI) have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is riddled with conflict and confusion rather than fruitful synergies. The aim of this paper is to explore ways to alleviate these issues, on both a practical and a theoretical level of analysis. First, we describe two approaches to machine ethics – the philosophical approach and the engineering approach – and show how tensions between the two arise from discipline-specific practices and aims. Using the concept of disciplinary capture, we then discuss potential promises and pitfalls of cross-disciplinary collaboration. Finally, drawing on recent work in the philosophy of science, we describe how metacognitive scaffolds can be used to avoid epistemological obstacles and foster innovative collaboration in AI ethics in general and machine ethics in particular.

    Towards a Transrelational Theory of Moral Agency

    This paper presents the foundations of a transrelational theory of moral agency, which extends the relational view with the idea that moral relationships include as well as transcend the descriptive criteria – e.g., in virtue of first-person properties or third-person observables – of whatever makes someone a moral agent. The result is a holistic picture that captures the complex interrelationship between what we know of and owe each other, and helps to explain whether – and in what way and to what extent – a human, non-human animal, or artificial entity is or ought to be a moral agent in cases where other theories fall short.

    Assessing the Time Efficiency of Ethical Algorithms

    Artificial moral agents must not only make competent ethical decisions; they must also do so efficiently. This paper explores how ethical theory and algorithmic design impact computational efficiency by assessing the time cost of ethical algorithms. We create a model of an ethical environment and conduct experiments on three different ethical algorithms in order to compare the computational benefits and disadvantages of deontology and consequentialism. The experimental results highlight the close relationship between ethical theory, algorithmic design, and resource costs, and our work provides an important starting point for the further examination of these relations. Lastly, we introduce the concept of moral tractability as an avenue for future work.
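
    The kind of contrast the paper draws can be made concrete with a toy benchmark – a hedged sketch, not the paper's model – in which a deontological filter checks actions against a fixed rule set (linear time) while a consequentialist search enumerates and scores every action sequence up to a horizon (exponential time). The action set, rules, and utilities below are invented for illustration.

```python
# Toy timing comparison of two ethical decision procedures.
import itertools, time

ACTIONS = ["help", "wait", "lie", "take"]
RULES = {"lie", "take"}                       # hypothetical forbidden actions

def deontological(actions, rules):
    # O(|actions|): keep any action that violates no rule.
    return [a for a in actions if a not in rules]

def consequentialist(actions, horizon, utility):
    # O(|actions|^horizon): score every possible action sequence.
    return max(itertools.product(actions, repeat=horizon),
               key=lambda seq: sum(utility(a) for a in seq))

utility = {"help": 2, "wait": 0, "lie": 1, "take": 1}.get   # invented utilities

for horizon in (4, 8):
    t0 = time.perf_counter()
    deontological(ACTIONS, RULES)
    t1 = time.perf_counter()
    consequentialist(ACTIONS, horizon, utility)
    t2 = time.perf_counter()
    print(f"h={horizon}: deontic {1e6 * (t1 - t0):.1f} µs, "
          f"consequentialist {1e3 * (t2 - t1):.1f} ms")
```

    Even in this crude setup, the consequentialist's cost grows geometrically with the planning horizon while the rule check stays flat, which is the sort of asymmetry the notion of moral tractability seems intended to capture.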